Project Team Pulsar

Section: New Results

Online Parameter Tuning for Object Tracking Algorithms

Participants: Duc Phu Chau, Monique Thonnat, François Brémond.

Many approaches have been proposed to track mobile objects in a scene, but the quality of a tracking algorithm always depends on scene properties such as mobile object density, contrast intensity, scene depth and object size. Selecting a tracking algorithm for an unknown scene is therefore a hard task, and even when the tracker has been appropriately selected, it is difficult to tune its parameters online to obtain the best performance.

We therefore propose a new control approach for mobile object tracking. More precisely, to cope with variations in the tracking context, this approach learns how to tune the parameters of appearance-based object tracking algorithms. The tracking context of a video sequence is defined as a set of features: the density of mobile objects, their occlusion level, their contrast with respect to the background and their 2D areas. In an offline supervised learning phase, satisfactory tracking parameters are searched for each training video sequence. These video sequences are then classified by clustering their contextual features, and each context cluster is associated with the learned tracking parameters. In the online control phase, two approaches are proposed. In the first one, once a context change is detected, the tracking parameters are tuned for the new context using the learned values. In the second one, parameter tuning is performed only when the context changes and the tracking quality (computed by an online performance evaluation algorithm [56]) is not good enough. An online learning process updates the context/parameter relations.
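The offline/online control loop described above can be sketched as follows. This is a minimal illustration, not the published implementation: it replaces context clustering with a nearest-neighbor lookup over stored (context, parameters) pairs, and all feature values, thresholds and parameter names are hypothetical.

```python
import math

def context_distance(c1, c2):
    """Euclidean distance between two contextual feature vectors
    (object density, occlusion level, contrast, 2D area)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

class TrackingController:
    """Minimal sketch of the control approach.

    Offline: store (context, best_params) pairs learned per training video.
    Online: when the context drifts past a threshold, switch to the
    parameters associated with the nearest learned context.
    """

    def __init__(self, learned, change_threshold=0.5):
        self.learned = learned          # list of (context_vector, params)
        self.threshold = change_threshold
        self.current_context = None
        self.current_params = None

    def update(self, context):
        """Return the tracking parameters to use for the current context."""
        if (self.current_context is None or
                context_distance(context, self.current_context) > self.threshold):
            # Context change detected: look up the nearest learned context.
            _, params = min(self.learned,
                            key=lambda cp: context_distance(context, cp[0]))
            self.current_context = context
            self.current_params = params
        return self.current_params

# Hypothetical learned contexts: (density, occlusion, contrast, area) -> params
learned = [
    ((0.8, 0.6, 0.3, 0.2), {"gate": 2.0, "min_track_len": 5}),
    ((0.1, 0.1, 0.9, 0.5), {"gate": 0.8, "min_track_len": 2}),
]
ctrl = TrackingController(learned)
params = ctrl.update((0.75, 0.55, 0.35, 0.25))  # crowded, low-contrast scene
```

In the second online variant described above, the lookup would additionally be gated on the output of the performance evaluation algorithm, so that well-performing parameters are not replaced merely because the context changed.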

We have also proposed two new tracking algorithms to evaluate the proposed control method. The first tracker relies on a Kalman filter and a global tracking stage that fuses trajectories belonging to the same mobile object; this work has been published in [35]. The second tracker relies on the similarities of eight object descriptors (2D and 3D positions, area, shape ratio, HOG, color histogram, color covariance and dominant color) to build object trajectories; this work has been published in [34].
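The descriptor-based tracker links detections whose combined descriptor similarity is high. The sketch below illustrates the idea with only three descriptors (2D position, area, color histogram) and illustrative weights; the published tracker combines all eight descriptors listed above, and the similarity functions shown here are plausible choices rather than the actual ones.

```python
import math

def position_sim(p, q, scale=50.0):
    """Similarity from 2D distance, mapped into (0, 1]."""
    return math.exp(-math.dist(p, q) / scale)

def area_sim(a, b):
    """Ratio of smaller to larger area, in (0, 1]."""
    return min(a, b) / max(a, b)

def hist_sim(h, g):
    """Histogram intersection for normalized color histograms."""
    return sum(min(x, y) for x, y in zip(h, g))

# Illustrative weights summing to 1; a real system would learn or tune them.
WEIGHTS = {"position": 0.4, "area": 0.3, "color": 0.3}

def link_score(det_a, det_b):
    """Weighted sum of descriptor similarities; higher means the two
    detections are more likely to belong to the same mobile object."""
    return (WEIGHTS["position"] * position_sim(det_a["pos"], det_b["pos"])
            + WEIGHTS["area"] * area_sim(det_a["area"], det_b["area"])
            + WEIGHTS["color"] * hist_sim(det_a["hist"], det_b["hist"]))

# Hypothetical detections: a and b are the same person in consecutive
# frames; c is a distant, differently colored object.
a = {"pos": (100, 120), "area": 900, "hist": [0.5, 0.3, 0.2]}
b = {"pos": (104, 122), "area": 880, "hist": [0.48, 0.32, 0.20]}
c = {"pos": (400, 50), "area": 200, "hist": [0.1, 0.1, 0.8]}
```

Note that it is precisely the weights of such a combination that the controller above could tune per context, e.g. down-weighting position in crowded scenes where detections overlap.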

Figure 10. (a) CARETAKER: illustration of the Caretaker video; (b) CAVIAR: illustration of the Caviar video

The proposed controller has been evaluated on a long, complex video from the Caretaker European project (http://cordis.europa.eu/ist/kct/caretaker_synopsis.htm) (see figure 10(a)) and on 26 videos of the Caviar dataset (http://homepages.inf.ed.ac.uk/rbf/CAVIARDATA1/) (see figure 10(b)). On the Caretaker video, the tracking quality increases from 52% to 78% when the controller is used. On the Caviar dataset, the experimental results show that tracking performance increases from 78.3% to 84.4% with the controller. The tracking results on the Caviar videos with the proposed controller are as good as those obtained with manual parameter tuning.